Crafting Quality Products: Modeling Injection Molding Outcomes with Logistic Analysis.
This project investigates how injection molding process factors affect the quality of plastic road lenses in a dataset with 1451 samples, 13 process variables, and a quality label, guided by the idea that “quality is when the customer comes back, not the product.” Quality is divided into four classes based on general uniformity \(U_0\). Class 3 (“Target”) is the best outcome, with \(0.45 \le U_0 \le 0.5\), and Class 2 (“Acceptable”) meets the minimum standard with \(0.4 \le U_0 < 0.45\). Class 1 (“Waste”) has \(U_0 < 0.4\) and represents defective lenses, while Class 4 (“Inefficient”) has \(U_0 > 0.5\) and produces more uniformity than required, using extra resources without added benefit. For the purposes of modeling, Classes 1 and 4 are combined into a single “bad” category, and Classes 2 and 3 are treated as satisfactory “good” quality.
Key questions guiding the work are:
Which injection molding variables differ most across the four quality classes (Waste, Acceptable, Target, Inefficient)?
How do process conditions contrast between “Good” lenses (quality levels 2 and 3) and “Bad” lenses (levels 1 and 4)?
How do two logistic regression models, a full model with all process variables and a reduced model with only the significant predictors, compare in their ability to classify lenses as “Good” (levels 2 and 3) versus “Bad” (levels 1 and 4) and to distinguish among the four quality classes in a multinomial setting?
| Statistic | melt_temp | mold_temp | fill_time | plast_time |
|---|---|---|---|---|
| Min. | 81.75 | 78.41 | 6.084 | 2.780 |
| 1st Qu. | 105.91 | 81.12 | 6.292 | 3.000 |
| Median | 106.09 | 81.33 | 6.968 | 3.193 |
| Mean | 106.89 | 81.33 | 7.459 | 3.234 |
| 3rd Qu. | 106.26 | 81.44 | 7.124 | 3.290 |
| Max. | 155.03 | 82.16 | 11.232 | 6.610 |
| Statistic | cycle_time | close_force | clamp_peak | torque_peak |
|---|---|---|---|---|
| Min. | 74.78 | 876.7 | 894.8 | 94.2 |
| 1st Qu. | 74.82 | 893.6 | 914.4 | 114.2 |
| Median | 74.83 | 902.4 | 918.8 | 116.9 |
| Mean | 75.22 | 902.0 | 919.4 | 116.7 |
| 3rd Qu. | 75.65 | 909.4 | 926.3 | 120.2 |
| Max. | 75.79 | 930.6 | 946.5 | 130.3 |
The four quality classes define an ordered scale of performance for the road lenses, from clearly defective to optimal but excessively uniform. Waste (Class 1, 370 samples) includes parts with \(U_0 < 0.4\), which fail the UNI EN 13201 threshold and must be discarded as non-conforming, while Acceptable (Class 2, 406 samples) covers lenses with \(0.4 \le U_0 < 0.45\) that satisfy the external road lighting standard but do not reach the company’s internal target band. Target (Class 3, 310 samples) consists of lenses with \(0.45 \le U_0 \le 0.5\), representing the preferred outcome because they both meet the standard and achieve the desired quality level without wasting material or machine effort. Inefficient (Class 4, 360 samples) includes lenses with \(U_0 > 0.5\), which technically exceed the minimum requirement but are undesirable because they consume extra resources that do not translate into additional customer value.
Taken together, these labels form a natural ordinal quality scale aligned with increasing uniformity, using the numeric codes 1 to 4 to reflect progressively higher \(U_0\) values. Moving from Class 1 to Class 3 corresponds to genuine improvements in compliance and perceived quality, while moving further to Class 4 increases uniformity but reduces economic efficiency, so Class 3 is considered the true optimum.
The multinomial logit model and its interpretation are specified as follows.
Model equation:
For \(K\) unordered categories with category \(K\) as the baseline:
\[ \log \left( \frac{P(Y = k)}{P(Y = K)} \right) = \beta_{0k} + \beta_{1k} X_1 + \cdots + \beta_{pk} X_p, \quad k = 1, \dots, K-1 \]
Class 2 vs Class 1:
\[ \log \left( \frac{P(Y = 2)}{P(Y = 1)} \right) = \beta_{02} + \beta_{12} X_1 + \beta_{22} X_2 + \cdots + \beta_{p2} X_p \]
Class 3 vs Class 1:
\[ \log \left( \frac{P(Y = 3)}{P(Y = 1)} \right) = \beta_{03} + \beta_{13} X_1 + \beta_{23} X_2 + \cdots + \beta_{p3} X_p \]
Class 4 vs Class 1:
\[ \log \left( \frac{P(Y = 4)}{P(Y = 1)} \right) = \beta_{04} + \beta_{14} X_1 + \beta_{24} X_2 + \cdots + \beta_{p4} X_p \]
Accuracy measures the overall proportion of correct classifications, combining correctly predicted positives and negatives across all classes.
Sensitivity (recall) for a given class measures the proportion of true members of that class that the model correctly identifies, i.e., how well the model captures positives without missing them.
Specificity for a class measures the proportion of non-members that the model correctly identifies as not belonging to that class, i.e., how well the model avoids false alarms among observations that truly are not in that class.
The data are split roughly 70/30 into a training set (1,017 samples) and a test set (434 samples).
The training set contains 259 Waste, 285 Acceptable, 217 Target, and 256 Inefficient lenses; the test set contains 111, 121, 93, and 109, respectively, as tabulated below.
The training set is used to fit the model, and the test set is used to evaluate its predictions.
Distribution of Quality based on Training Data
| Quality | Frequency |
|---|---|
| 1 | 259 |
| 2 | 285 |
| 3 | 217 |
| 4 | 256 |
Distribution of Quality based on Test Data
| Quality | Frequency |
|---|---|
| 1 | 111 |
| 2 | 121 |
| 3 | 93 |
| 4 | 109 |
```
Confusion Matrix and Statistics

          Reference
Prediction   1   2   3   4
         1  81  25   3   0
         2  30  94   0   0
         3   0   2  78   3
         4   0   0  12 106

Overall Statistics

               Accuracy : 0.8272
                 95% CI : (0.7883, 0.8616)
    No Information Rate : 0.2788
    P-Value [Acc > NIR] : < 2.2e-16

                  Kappa : 0.7686

 Mcnemar's Test P-Value : NA

Statistics by Class:

                     Class: 1 Class: 2 Class: 3 Class: 4
Sensitivity            0.7297   0.7769   0.8387   0.9725
Specificity            0.9133   0.9042   0.9853   0.9631
Pos Pred Value         0.7431   0.7581   0.9398   0.8983
Neg Pred Value         0.9077   0.9129   0.9573   0.9905
Prevalence             0.2558   0.2788   0.2143   0.2512
Detection Rate         0.1866   0.2166   0.1797   0.2442
Detection Prevalence   0.2512   0.2857   0.1912   0.2719
Balanced Accuracy      0.8215   0.8405   0.9120   0.9678
```
```
Call:
multinom(formula = form_reduced, data = train, trace = FALSE)

Coefficients:
  (Intercept)  melt_temp  mold_temp fill_time plast_time cycle_time close_force
2  -14.573262 0.02155145   1.675962 0.1334476  -2.714362   0.342046  0.04218710
3   -6.108094 0.29042294  -3.664533 1.7372107  -6.851274  24.328017  0.08899520
4   -3.253354 0.37062158  -1.915685 1.1254329 -21.620003  28.543736  0.08360785
   clamp_peak back_press   inj_press  screw_pos   shot_vol
2  0.04412199 -0.5650042 -0.03429048   4.656245  -7.851402
3 -0.01439963 -0.6440426 -0.28113309 -23.215866 -56.948835
4 -0.01619850 -0.8317595 -0.51602337  -4.811632 -75.152313

Std. Errors:
  (Intercept)  melt_temp mold_temp  fill_time plast_time cycle_time close_force
2 0.001389570 0.04748373 0.1726180 0.03087333 0.01292875  0.1137965 0.009915953
3 0.002183648 0.03664525 0.3093193 0.07220058 0.04862119  0.1815878 0.133630285
4 0.002013340 0.03783932 0.2816574 0.06779269 0.04265489  0.1605781 0.167077159
   clamp_peak back_press   inj_press   screw_pos   shot_vol
2 0.009914638  0.1374948 0.007433183 0.012120182 0.02739205
3 0.126272092  0.1792256 0.025927164 0.002685573 0.05869166
4 0.159636027  0.1543695 0.028928349 0.002236911 0.05370142

Residual Deviance: 687.3222
AIC: 759.3222
```
```
Confusion Matrix and Statistics

          Reference
Prediction   1   2   3   4
         1  82  22   3   0
         2  28  97   0   0
         3   1   2  78   3
         4   0   0  12 106

Overall Statistics

               Accuracy : 0.8364
                 95% CI : (0.7982, 0.87)
    No Information Rate : 0.2788
    P-Value [Acc > NIR] : < 2.2e-16

                  Kappa : 0.781

 Mcnemar's Test P-Value : NA

Statistics by Class:

                     Class: 1 Class: 2 Class: 3 Class: 4
Sensitivity            0.7387   0.8017   0.8387   0.9725
Specificity            0.9226   0.9105   0.9824   0.9631
Pos Pred Value         0.7664   0.7760   0.9286   0.8983
Neg Pred Value         0.9113   0.9223   0.9571   0.9905
Prevalence             0.2558   0.2788   0.2143   0.2512
Detection Rate         0.1889   0.2235   0.1797   0.2442
Detection Prevalence   0.2465   0.2880   0.1935   0.2719
Balanced Accuracy      0.8307   0.8561   0.9106   0.9678
```
| Test | G2 | df | p_value |
|---|---|---|---|
| Likelihood Ratio (Null vs Reduced) | 2122.977 | 33 | 0 |
```
fitting null model for pseudo-r2
          llh       llhNull            G2      McFadden          r2ML          r2CU 
 -343.6610866 -1405.1497709  2122.9773686     0.7554274     0.8760020     0.9349824 
```
Below are the one-vs-all ROC curves for each quality class.
| Class | AUC |
|---|---|
| 1 | 0.933 |
| 2 | 0.938 |
| 3 | 0.959 |
| 4 | 0.981 |
| Macro-Avg | 0.953 |
---
title: "Crafting Quality Products"
author: "Saffan"
output:
flexdashboard::flex_dashboard:
theme:
version: 4
bootswatch: default
navbar-bg: "#173767"
orientation: columns
vertical_layout: fill
source_code: embed
---
```{r}
if (!require(pacman)) install.packages("pacman")
pacman::p_load(caret, flexdashboard, knitr, tidyverse, janitor, DT, GGally, corrplot, nnet, pROC, pscl, plotly)
#Read and prepare project data
df <- read.csv(
"Project data set.csv",
sep = ";",
header = TRUE,
check.names = FALSE
) |>
janitor::clean_names() |>
dplyr::rename(
melt_temp = melt_temperature,
mold_temp = mold_temperature,
fill_time = time_to_fill,
plast_time = z_dx_plasticizing_time,
cycle_time = z_ux_cycle_time,
close_force = s_kx_closing_force,
clamp_peak = s_ks_clamping_force_peak_value,
torque_peak = ms_torque_peak_value_current_cycle,
torque_mean = mm_torque_mean_value_current_cycle,
back_press = ap_ss_specific_back_pressure_peak_value,
inj_press = ap_vs_specific_injection_pressure_peak_value,
screw_pos = c_pn_screw_position_at_the_end_of_hold_pressure,
shot_vol = s_vo_shot_volume
)
# Outcome: quality as a 4-level factor (levels 2 & 3 are treated as "Good", 1 & 4 as "Bad" in the discussion)
df$quality <- factor(df$quality)
#Numeric predictors: short names
num_vars <- c(
"melt_temp","mold_temp","fill_time","plast_time","cycle_time",
"close_force","clamp_peak","torque_peak","torque_mean",
"back_press","inj_press","screw_pos","shot_vol"
)
```
Introduction
=======================================================================
Column {data-width=400}
-----------------------------------------------------------------------
### Motivation
**Crafting Quality Products: Modeling Injection Molding Outcomes with Logistic Analysis.**
This project investigates how injection molding process factors affect the quality of plastic road lenses in a dataset with 1451 samples, 13 process variables, and a quality label, guided by the idea that “quality is when the customer comes back, not the product.” Quality is divided into four classes based on general uniformity \(U_0\). Class 3 (“Target”) is the best outcome, with \(0.45 \le U_0 \le 0.5\), and Class 2 (“Acceptable”) meets the minimum standard with \(0.4 \le U_0 < 0.45\). Class 1 (“Waste”) has \(U_0 < 0.4\) and represents defective lenses, while Class 4 (“Inefficient”) has \(U_0 > 0.5\) and produces more uniformity than required, using extra resources without added benefit. For the purposes of modeling, Classes 1 and 4 are combined into a single “bad” category, and Classes 2 and 3 are treated as satisfactory “good” quality.
### Research Question
Key questions guiding the work are:
Which injection molding variables differ most across the four quality classes (Waste, Acceptable, Target, Inefficient)?
How do process conditions contrast between “Good” lenses (quality levels 2 and 3) and “Bad” lenses (levels 1 and 4)?
How do two logistic regression models, a full model with all process variables and a reduced model with only the significant predictors, compare in their ability to classify lenses as “Good” (levels 2 and 3) versus “Bad” (levels 1 and 4) and to distinguish among the four quality classes in a multinomial setting?
Column {.tabset data-width=400}
-----------------------------------------------------------------------
### Dataset overview
```{r}
datatable(
df,
options = list(pageLength = 25, scrollX = TRUE),
caption = "All 1451 samples of the injection molding dataset"
)
```
EDA
=======================================================================
Column {.tabset data-width=500}
-----------------------------------------------------------------------
### Distribution
```{r}
p1 <- ggplot(df, aes(x = quality)) +
geom_bar(fill = "steelblue") +
labs(
title = "Quality Classes for Road Lenses",
x = "Quality Class",
y = "Number of Observations"
) +
theme_minimal() +
theme(
plot.title = element_text(size = 10, hjust = 0.5),
axis.title = element_text(size = 8),
axis.text = element_text(size = 7)
)
ggplotly(p1)
```
### Correlation
```{r}
library(GGally)
df |>
select(all_of(num_vars)) |>
cor(use = "pairwise.complete.obs") |>
corrplot::corrplot(method = "color", tl.cex = 0.6, tl.col = "black")
```
### Boxplot
```{r, fig.align='center', fig.height=10, fig.width=8}
df |>
select(quality, all_of(num_vars)) |>
pivot_longer(
cols = -quality,
names_to = "Variable",
values_to = "Value"
) |>
ggplot(aes(x = quality, y = Value)) +
geom_boxplot(
fill = "#4F81BD",
color = "black",
outlier.color = "black",
outlier.size = 0.8
) +
facet_wrap(~ Variable, scales = "free", ncol = 3) +
labs(
title = "Process Variables Across Four Quality Classes",
x = "Quality Class",
y = "Value"
) +
theme_minimal(base_size = 10) +
theme(
plot.title = element_text(hjust = 0.5, face = "bold"),
strip.text = element_text(size = 4, face = "bold"),
axis.text.x = element_text(hjust = 0.5,angle = 0)
)
```
### Summary
```{r Summary Statistics}
# 1. Get summary as a matrix
s <- summary(df[num_vars])
kable(s[, 1:4])
kable(s[, 5:8])
```
Column {data-width=500}
-----------------------------------------------------------------------
### Analysis {data-height=800}
The four quality classes define an ordered scale of performance for the road lenses, from clearly defective to optimal but excessively uniform.
Waste ***(Class 1, 370 samples)*** includes parts with U0 < 0.4, which fail the UNI EN 13201 threshold and must be discarded as non-conforming, while Acceptable ***(Class 2, 406 samples)*** covers lenses with 0.4 ≤ U0 < 0.45 that satisfy the external road lighting standard but do not reach the company’s internal target band. Target ***(Class 3, 310 samples)*** consists of lenses with 0.45 ≤ U0 ≤ 0.5, representing the preferred outcome because they both meet the standard and achieve the desired quality level without wasting material or machine effort. Inefficient ***(Class 4, 360 samples)*** includes lenses with U0 > 0.5, which technically exceed the minimum requirement but are undesirable because they consume extra resources that do not translate into additional customer value.
Taken together, these labels form a natural ordinal quality scale aligned with increasing uniformity, using the numeric codes 1 to 4 to reflect progressively higher U0 values. ***Moving from Class 1 to Class 3 corresponds to genuine improvements in compliance and perceived quality, while moving further to Class 4 increases uniformity but reduces economic efficiency, so Class 3 is considered the true optimum.***
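As a minimal sketch of how the thresholds above map uniformity onto the four class codes (the dataset itself ships with the quality labels already assigned, so the `u0` vector here is purely hypothetical), `cut()` can encode the banding:

```r
# Hypothetical U0 values for illustration only; not taken from the project data.
u0 <- c(0.38, 0.42, 0.47, 0.55)

# Map uniformity to the four quality codes using the thresholds in the text.
# Intervals are closed on the left ([0.4, 0.45), ...); the single boundary
# point U0 = 0.5 belongs to Target in the text and would need special handling.
quality_class <- cut(
  u0,
  breaks = c(-Inf, 0.40, 0.45, 0.50, Inf),
  labels = c("1", "2", "3", "4"),  # Waste, Acceptable, Target, Inefficient
  right  = FALSE
)
as.character(quality_class)  # "1" "2" "3" "4"
```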
Method
=======================================================================
Column {.tabset data-width=500}
-----------------------------------------------------------------------
### Conceptual Understanding
The multinomial logit model and its interpretation are specified as follows.
Model equation:
For \(K\) unordered categories with category \(K\) as the baseline:
\[
\log \left( \frac{P(Y = k)}{P(Y = K)} \right)
= \beta_{0k} + \beta_{1k} X_1 + \cdots + \beta_{pk} X_p,
\quad k = 1, \dots, K-1
\]
**Class 2 vs Class 1:**
\[
\log \left( \frac{P(Y = 2)}{P(Y = 1)} \right)
= \beta_{02} + \beta_{12} X_1 + \beta_{22} X_2 + \cdots + \beta_{p2} X_p
\]
**Class 3 vs Class 1:**
\[
\log \left( \frac{P(Y = 3)}{P(Y = 1)} \right)
= \beta_{03} + \beta_{13} X_1 + \beta_{23} X_2 + \cdots + \beta_{p3} X_p
\]
**Class 4 vs Class 1:**
\[
\log \left( \frac{P(Y = 4)}{P(Y = 1)} \right)
= \beta_{04} + \beta_{14} X_1 + \beta_{24} X_2 + \cdots + \beta_{p4} X_p
\]
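Equivalently, these log-odds imply softmax-form class probabilities with Class 1 as the baseline:
\[
P(Y = k \mid X) = \frac{\exp(\beta_{0k} + \beta_{1k} X_1 + \cdots + \beta_{pk} X_p)}
{1 + \sum_{j=2}^{K} \exp(\beta_{0j} + \beta_{1j} X_1 + \cdots + \beta_{pj} X_p)},
\quad k = 2, \dots, K,
\]
\[
P(Y = 1 \mid X) = \frac{1}{1 + \sum_{j=2}^{K} \exp(\beta_{0j} + \beta_{1j} X_1 + \cdots + \beta_{pj} X_p)}.
\]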
### Classification performance metrics {data-height=800}
- Accuracy measures the overall proportion of correct classifications, combining correctly predicted positives and negatives across all classes.
- Sensitivity (recall) for a given class measures the proportion of true members of that class that the model correctly identifies, i.e., how well the model captures positives without missing them.
- Specificity for a class measures the proportion of non-members that the model correctly identifies as not belonging to that class, i.e., how well the model avoids false alarms among observations that truly are not in that class.
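As a toy illustration (invented 2×2 counts, unrelated to the project data; `caret::confusionMatrix` reports all of these automatically for the fitted models), the three metrics reduce to simple ratios over a confusion matrix:

```r
# Toy confusion matrix: rows = predicted class, columns = actual class.
cm <- matrix(c(50, 10,   # predicted "good": 50 true good, 10 false alarms
                5, 35),  # predicted "bad":   5 missed good, 35 true bad
             nrow = 2, byrow = TRUE,
             dimnames = list(pred = c("good", "bad"), actual = c("good", "bad")))

accuracy    <- sum(diag(cm)) / sum(cm)                 # (50 + 35) / 100 = 0.85
sensitivity <- cm["good", "good"] / sum(cm[, "good"])  # 50 / 55: true good found
specificity <- cm["bad",  "bad"]  / sum(cm[, "bad"])   # 35 / 45: true bad found
```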
Column {.tabset data-width=500}
-----------------------------------------------------------------------
### Data Preparation {data-height=800}
The data are split roughly 70/30 into a training set (1,017 samples) and a test set (434 samples).
The training set contains 259 Waste, 285 Acceptable, 217 Target, and 256 Inefficient lenses;
the test set contains 111, 121, 93, and 109, respectively, as tabulated below.
The training set is used to fit the model, and the test set is used to evaluate its predictions.
**Distribution of Quality based on Training Data**
```{r}
set.seed(123)
# Train-test split
idx <- createDataPartition(df$quality, p = 0.7, list = FALSE)
train <- df[idx, ]
test <- df[-idx, ]
# Ensure quality is factor
train$quality <- as.factor(train$quality)
test$quality <- as.factor(test$quality)
# Set baseline (reference) category to class "1"
train$quality <- relevel(train$quality, ref = "1")
test$quality <- relevel(test$quality, ref = "1")
kable(table(train$quality),
col.names = c("Quality", "Frequency"),
align = c("c", "c"))
```
\
\
**Distribution of Quality based on Test Data**
```{r}
kable(table(test$quality),
col.names = c("Quality", "Frequency"),
align = c("c", "c"))
```
Results
=======================================================================
Column {.tabset data-width=500}
-----------------------------------------------------------------------
### Model 1
```{r}
form_full <- quality ~ melt_temp + mold_temp + fill_time + plast_time +
cycle_time + close_force + clamp_peak + torque_peak + torque_mean +
back_press + inj_press + screw_pos + shot_vol
mlr_full <- multinom(form_full, data = train, trace = FALSE)
s <- summary(mlr_full)
coef_mat <- s$coefficients # rows: non-baseline classes, cols: intercept + predictors
se_mat <- s$standard.errors
# z and p
z_mat <- coef_mat / se_mat
p_mat <- 2 * (1 - pnorm(abs(z_mat)))
# Convert to long format table
coef_df <- as.data.frame(coef_mat)
coef_df$class <- rownames(coef_mat)
se_df <- as.data.frame(se_mat) ; se_df$class <- rownames(se_mat)
z_df <- as.data.frame(z_mat) ; z_df$class <- rownames(z_mat)
p_df <- as.data.frame(p_mat) ; p_df$class <- rownames(p_mat)
full_long <- coef_df |>
pivot_longer(-class, names_to = "term", values_to = "estimate") |>
left_join(
se_df |> pivot_longer(-class, names_to = "term", values_to = "std.error"),
by = c("class", "term")
) |>
left_join(
z_df |> pivot_longer(-class, names_to = "term", values_to = "z.value"),
by = c("class", "term")
) |>
left_join(
p_df |> pivot_longer(-class, names_to = "term", values_to = "p.value"),
by = c("class", "term")
) |>
arrange(class, term)
# Optional: nicer labels and rounding
coef_table <- full_long |>
mutate(
term = dplyr::recode(term,
`(Intercept)` = "Intercept",
melt_temp = "Melt temperature",
mold_temp = "Mold temperature",
fill_time = "Fill time",
plast_time = "Plasticizing time",
cycle_time = "Cycle time",
close_force = "Closing force",
clamp_peak = "Clamping force peak",
torque_peak = "Torque peak",
torque_mean = "Torque mean",
back_press = "Back pressure",
inj_press = "Injection pressure",
screw_pos = "Screw position",
shot_vol = "Shot volume"
),
estimate = round(estimate, 2),
std.error = round(std.error, 3),
z.value = round(z.value, 3),
p.value = signif(p.value, 3)
)
datatable(coef_table,
options = list(pageLength = 14, scrollX = TRUE))
```
### CM 1
```{r}
pred_class <- predict(mlr_full, newdata = test)
pred_prob <- predict(mlr_full, newdata = test, type = "prob")
cm <- confusionMatrix(
data = factor(pred_class, levels = levels(test$quality)),
reference = test$quality
)
cm
```
### Model 2
```{r}
form_reduced <- quality ~ melt_temp + mold_temp + fill_time + plast_time +
  cycle_time + close_force + clamp_peak + back_press + inj_press + screw_pos + shot_vol
mlr_reduced <- multinom(form_reduced, data = train, trace = FALSE)
summary(mlr_reduced)
```
### CM 2
```{r}
pred_class <- predict(mlr_reduced, newdata = test)
pred_prob <- predict(mlr_reduced, newdata = test, type = "prob")
cm <- confusionMatrix(
data = factor(pred_class, levels = levels(test$quality)),
reference = test$quality
)
cm
```
### Goodness of Fit
```{r}
# Null model (intercept only)
null_model <- multinom(quality ~ 1, data = train, trace = FALSE)
# Log-likelihoods of the fitted reduced model and of the null model
LL_full <- logLik(mlr_reduced)
LL_null <- logLik(null_model)
# Likelihood ratio statistic
G2 <- -2 * (as.numeric(LL_null) - as.numeric(LL_full))
# Degrees of freedom difference
df1 <- attr(LL_full, "df") - attr(LL_null, "df")
# p-value
p_value <- pchisq(G2, df = df1, lower.tail = FALSE)
# Display as tibble
tibble(
  Test = "Likelihood Ratio (Null vs Reduced)",
G2 = round(G2, 3),
df = df1,
p_value = signif(p_value, 3)
) |>
kable()
pR2(mlr_reduced)
```
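For reference, the quantities computed in this chunk follow the standard definitions, where \(\ell_{\text{model}}\) and \(\ell_{\text{null}}\) are the maximized log-likelihoods of the reduced and intercept-only models:
\[
G^2 = -2\left(\ell_{\text{null}} - \ell_{\text{model}}\right), \qquad
R^2_{\text{McFadden}} = 1 - \frac{\ell_{\text{model}}}{\ell_{\text{null}}}.
\]
With \(\ell_{\text{model}} \approx -343.66\) and \(\ell_{\text{null}} \approx -1405.15\), this gives \(G^2 \approx 2122.98\) and \(R^2_{\text{McFadden}} \approx 0.755\), matching the reported values.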
### Overall Performance
#### ROC Curves
Below are the one-vs-all ROC curves for each quality class.
```{r, fig.align='center'}
# 1. Compute ROC objects for each class (one-vs-all)
classes <- levels(test$quality)
roc_list <- lapply(classes, function(cl) {
roc(
response = as.numeric(test$quality == cl),
predictor = pred_prob[, cl],
quiet = TRUE
)
})
names(roc_list) <- classes
# 2. Tidy data frame for ggplot
roc_df <- purrr::map2_dfr(
roc_list, names(roc_list),
~ tibble(
class = .y,
specificity = rev(.x$specificities),
sensitivity = rev(.x$sensitivities)
)
)
# 3. Faceted ROC plot: one nice panel per class
ggplot(roc_df, aes(x = 1 - specificity, y = sensitivity)) +
geom_line(linewidth = 1) +
geom_abline(slope = 1, intercept = 0, linetype = "dashed") +
facet_wrap(~ class, nrow = 1) +
coord_equal() +
theme_minimal(base_size = 13) +
labs(
title = "ROC Curves for Each Quality Class",
x = "False Positive Rate (1 - Specificity)",
y = "True Positive Rate (Sensitivity)"
) +
theme(
strip.text = element_text(face = "bold"),
plot.title = element_text(face = "bold"),
legend.position = "none"
)
```
#### AUC values for each class + macro-average
```{r}
auc_vec <- sapply(roc_list, auc)
tibble(
Class = names(auc_vec),
AUC = round(as.numeric(auc_vec), 3)
) |>
add_row(
Class = "Macro-Avg",
AUC = round(mean(as.numeric(auc_vec)), 3)
) |>
kable()
```
Column {data-width=500}
-----------------------------------------------------------------------
### Discussion
Conclusion
=======================================================================
Column {data-width=500}
-----------------------------------------------------------------------
### Conclusion
### Limitation
Column {data-width=500}
-----------------------------------------------------------------------
### References
### About the Author